Glioma C6: A Novel Dataset for Training and Benchmarking Cell Segmentation

Malashin, Roman, Pashkevich, Svetlana, Ilyukhin, Daniil, Volkov, Arseniy, Yachnaya, Valeria, Denisov, Andrey, Mikhalkova, Maria

arXiv.org Artificial Intelligence

We present Glioma C6, a new open dataset for instance segmentation of glioma C6 cells, designed as both a benchmark and a training resource for deep learning models. The dataset comprises 75 high-resolution phase-contrast microscopy images with over 12,000 annotated cells, providing a realistic testbed for biomedical image analysis. It includes soma annotations and a morphology-based categorization of cells provided by biologists, intended to broaden the dataset's utility for cancer cell research. Glioma C6 consists of two parts: the first is curated with controlled parameters for benchmarking, while the second supports generalization testing under varying conditions. We evaluate several generalist segmentation models, highlighting their limitations on our dataset. Our experiments demonstrate that training on Glioma C6 significantly improves segmentation performance, reinforcing its value for developing robust and generalizable models. The dataset is publicly available for researchers.
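
A minimal loading sketch can make the benchmarking workflow concrete. The directory layout and file names below (glioma_c6/benchmark, images/, masks/) are assumptions for illustration, not the dataset's documented structure:

```python
# Sketch: iterate over an instance-segmentation dataset of this kind and
# count annotated cells per image. Paths are hypothetical; consult the
# released dataset for its actual layout.
from pathlib import Path
import numpy as np
from skimage import io

root = Path("glioma_c6/benchmark")                    # hypothetical layout
for img_path in sorted((root / "images").glob("*.tif")):
    img = io.imread(img_path)                          # phase-contrast image
    mask = io.imread(root / "masks" / img_path.name)   # integer label mask
    n_cells = len(np.unique(mask)) - (1 if 0 in mask else 0)  # 0 = background
    print(img_path.name, img.shape, n_cells)
```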


DM-QPMNET: Dual-modality fusion network for cell segmentation in quantitative phase microscopy

Chakraborty, Rajatsubhra, Espinosa-Momox, Ana, Haskin, Riley, Xu, Depeng, Porras-Aguilar, Rosario

arXiv.org Artificial Intelligence

Cell segmentation in single-shot quantitative phase microscopy (ssQPM) faces challenges from traditional thresholding methods that are sensitive to noise and cell density, while deep learning approaches using simple channel concatenation fail to exploit the complementary nature of polarized intensity images and phase maps. We introduce DM-QPMNet, a dual-encoder network that treats these as distinct modalities with separate encoding streams. Our architecture fuses modality-specific features at intermediate depth via multi-head attention, enabling polarized edge and texture representations to selectively integrate complementary phase information. This content-aware fusion preserves training stability while adding principled multi-modal integration through dual-source skip connections and per-modality normalization at minimal overhead. Our approach demonstrates substantial improvements over monolithic concatenation and single-modality baselines, showing that modality-specific encoding with learnable fusion effectively exploits ssQPM's simultaneous capture of complementary illumination and phase cues for robust cell segmentation.
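
The described architecture, two encoding streams fused at intermediate depth by multi-head attention, can be sketched in a few lines of PyTorch. Everything below (layer widths, fusion depth, the query/key assignment) is an illustrative assumption, not the authors' implementation:

```python
# Minimal sketch of a dual-encoder segmentation net with cross-attention
# fusion, loosely following the DM-QPMNet description (not the authors' code).
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU(),
        nn.Conv2d(cout, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU(),
    )

class DualEncoderSeg(nn.Module):
    def __init__(self, c=32, heads=4):
        super().__init__()
        # Separate encoding streams: polarized intensity vs. phase map,
        # each with its own normalization layers (per-modality BatchNorm).
        self.enc_int = nn.Sequential(conv_block(1, c), nn.MaxPool2d(2), conv_block(c, 2 * c))
        self.enc_phase = nn.Sequential(conv_block(1, c), nn.MaxPool2d(2), conv_block(c, 2 * c))
        # Content-aware fusion at intermediate depth via multi-head attention.
        self.attn = nn.MultiheadAttention(embed_dim=2 * c, num_heads=heads, batch_first=True)
        self.dec = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            conv_block(4 * c, c),   # dual-source input: fused + raw intensity stream
            nn.Conv2d(c, 1, 1),     # foreground logit per pixel
        )

    def forward(self, intensity, phase):
        fi = self.enc_int(intensity)        # (B, 2c, H/2, W/2)
        fp = self.enc_phase(phase)          # (B, 2c, H/2, W/2)
        b, ch, h, w = fi.shape
        q = fi.flatten(2).transpose(1, 2)   # queries from the intensity stream
        kv = fp.flatten(2).transpose(1, 2)  # keys/values from the phase stream
        fused, _ = self.attn(q, kv, kv)
        fused = fused.transpose(1, 2).reshape(b, ch, h, w)
        return self.dec(torch.cat([fused, fi], dim=1))

net = DualEncoderSeg()
logits = net(torch.randn(1, 1, 64, 64), torch.randn(1, 1, 64, 64))
print(logits.shape)  # torch.Size([1, 1, 64, 64])
```

Queries come from the intensity stream here so that edge and texture features select which phase information to integrate, echoing the paper's description of content-aware fusion.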


EAP4EMSIG -- Enhancing Event-Driven Microscopy for Microfluidic Single-Cell Analysis

Friederich, Nils, Sitcheu, Angelo Jovin Yamachui, Nassal, Annika, Yildiz, Erenus, Pesch, Matthias, Beichter, Maximilian, Scholtes, Lukas, Akbaba, Bahar, Lautenschlager, Thomas, Neumann, Oliver, Kohlheyer, Dietrich, Scharr, Hanno, Seiffarth, Johannes, Nöh, Katharina, Mikut, Ralf

arXiv.org Artificial Intelligence

Microfluidic live-cell imaging yields data on microbial cell factories. However, continuous acquisition is challenging, as high-throughput experiments often lack real-time insights, delaying responses to stochastic events. We introduce three components in the Experiment Automation Pipeline for Event-Driven Microscopy to Smart Microfluidic Single-Cell Analysis: a fast, accurate deep learning autofocusing method that predicts the focus offset, an evaluation of real-time segmentation methods, and a real-time data analysis dashboard. Our autofocusing achieves a mean absolute error of 0.0226 µm with inference times below 50 ms. Among eleven deep learning segmentation methods, Cellpose 3 reached a Panoptic Quality of 93.58%, while a distance-based method is fastest (121 ms, Panoptic Quality 93.02%). All six deep learning foundation models tested were unsuitable for real-time segmentation.
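
The autofocusing component is a regression problem: predict a scalar focus offset from an image. A hedged sketch, with architecture and training loop as assumptions rather than the EAP4EMSIG implementation:

```python
# Sketch of a DL autofocusing regressor: a small CNN maps an image crop
# to a scalar focus offset (µm). Illustrative only, not the authors' model.
import torch
import torch.nn as nn

class FocusRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 1)  # predicted focus offset in µm

    def forward(self, x):
        return self.head(self.features(x).flatten(1)).squeeze(1)

model = FocusRegressor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()  # trains directly on the reported metric (MAE)

imgs = torch.randn(8, 1, 128, 128)   # dummy defocused crops
offsets = torch.randn(8)             # dummy ground-truth offsets (µm)
loss = loss_fn(model(imgs), offsets)
loss.backward(); opt.step()
```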


Enhancing Deep Learning Based Structured Illumination Microscopy Reconstruction with Light Field Awareness

Shan, Long-Kun, Wang, Ze-Hao, Weng, Tong-Tian, Chen, Xiang-Dong, Sun, Fang-Wen

arXiv.org Artificial Intelligence

Structured illumination microscopy (SIM) is a pivotal technique for dynamic subcellular imaging in live cells. Conventional SIM reconstruction algorithms depend on accurately estimating the illumination pattern and can introduce artefacts when this estimation is imprecise. Although recent deep learning-based SIM reconstruction methods have improved speed, accuracy, and robustness, they often struggle with out-of-distribution data. To address this limitation, we propose an Awareness-of-Light-field SIM (AL-SIM) reconstruction approach that directly estimates the actual light field to correct for errors arising from data distribution shifts. Through comprehensive experiments on both simulated filament structures and live BSC1 cells, our method demonstrates a 7% reduction in the normalized root mean square error (NRMSE) and substantially lowers reconstruction artefacts. By minimizing these artefacts and improving overall accuracy, AL-SIM broadens the applicability of SIM for complex biological systems.
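
For reference, the NRMSE reported here can be computed as follows; normalization conventions vary between papers (L2 norm, range, or mean of the ground truth), so the definition below is one common choice, not necessarily the authors':

```python
# One common NRMSE definition: RMSE normalized by the ground truth's RMS value.
import numpy as np

def nrmse(reconstruction, ground_truth):
    err = np.sqrt(np.mean((reconstruction - ground_truth) ** 2))
    return err / np.sqrt(np.mean(ground_truth ** 2))
```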


Artificial Intelligence for Microbiology and Microbiome Research

Wang, Xu-Wen, Wang, Tong, Liu, Yang-Yu

arXiv.org Machine Learning

Advancements in artificial intelligence (AI) have transformed many scientific fields, with microbiology and microbiome research now experiencing significant breakthroughs through machine learning and deep learning applications. This review provides a comprehensive overview of AI-driven approaches tailored for microbiology and microbiome studies, emphasizing both technical advancements and biological insights. We begin with an introduction to foundational AI techniques, including primary machine learning paradigms and various deep learning architectures, and offer guidance on choosing between machine learning and deep learning methods based on specific research goals. The primary section on application scenarios spans diverse research areas, including taxonomic profiling, functional annotation and prediction, microbe-X interactions, microbial ecology, metabolic modeling, precision nutrition, clinical microbiology, and prevention and therapeutics. Finally, we discuss challenges unique to this field, including the balance between interpretability and complexity, the "small n, large p" problem, and the critical need for standardized benchmarking datasets to validate and compare models. Together, this review underscores AI's transformative role in microbiology and microbiome research, paving the way for innovative methodologies and applications that enhance our understanding of microbial life and its impact on our planet and our health.
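
The "small n, large p" problem the review highlights (far more taxa than samples) is commonly tackled with sparse, regularized baselines. A synthetic-data sketch, with all numbers and labels invented for illustration:

```python
# Illustration of the small-n, large-p regime: 60 samples, 2000 taxa.
# An L1-regularized linear model with cross-validation is a standard baseline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n, p = 60, 2000
X = rng.lognormal(size=(n, p))
X = X / X.sum(axis=1, keepdims=True)   # relative abundances
y = rng.integers(0, 2, size=n)         # e.g., case vs. control labels

clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
scores = cross_val_score(clf, X, y, cv=5)
print(scores.mean())
```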


Cellpose+, a morphological analysis tool for feature extraction of stained cell images

Huaman, Israel A., Ghorabe, Fares D. E., Chumakova, Sofya S., Pisarenko, Alexandra A., Dudaev, Alexey E., Volova, Tatiana G., Ryltseva, Galina A., Ulasevich, Sviatlana A., Shishatskaya, Ekaterina I., Skorb, Ekaterina V., Zun, Pavel S.

arXiv.org Artificial Intelligence

Advanced image segmentation and processing tools present an opportunity to study cell processes and their dynamics. However, image analysis is often routine and time-consuming. Data-driven approaches using deep learning now offer the potential for automated, accurate, and fast image analysis. In this paper, we extend the applications of Cellpose, a state-of-the-art cell segmentation framework, with feature extraction capabilities to assess morphological characteristics. We also introduce a dataset of DAPI- and FITC-stained cells to which our new method is applied.
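
The combination of segmentation and morphological feature extraction described here can be approximated with off-the-shelf tools; the sketch below pairs the public Cellpose API with scikit-image's regionprops. The file name, model choice, and feature set are illustrative assumptions, not the Cellpose+ code:

```python
# Segment with Cellpose, then derive per-cell morphological features from the
# label mask. API as in Cellpose v2/v3; check your installed version.
import numpy as np
from skimage import io, measure
from cellpose import models

img = io.imread("dapi_example.tif")           # hypothetical stained image
model = models.Cellpose(model_type="nuclei")  # pretrained nuclei model
masks, flows, styles, diams = model.eval(img, channels=[0, 0])

for region in measure.regionprops(masks):
    circularity = 4 * np.pi * region.area / region.perimeter ** 2
    print(region.label, region.area, region.eccentricity, round(circularity, 3))
```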


Season combinatorial intervention predictions with Salt & Peper

Gaudelet, Thomas, Del Vecchio, Alice, Carrami, Eli M, Cudini, Juliana, Kapourani, Chantriolnt-Andreas, Uhler, Caroline, Edwards, Lindsay

arXiv.org Artificial Intelligence

Interventions play a pivotal role in the study of complex biological systems. In drug discovery, genetic interventions (such as CRISPR base editing) have become central to both identifying potential therapeutic targets and understanding a drug's mechanism of action. With the advancement of CRISPR and the proliferation of genome-scale analyses such as transcriptomics, a new challenge is to navigate the vast combinatorial space of concurrent genetic interventions. Addressing this, our work concentrates on estimating the effects of pairwise genetic combinations on the cellular transcriptome. We introduce two novel contributions: Salt, a biologically-inspired baseline that posits the mostly additive nature of combination effects, and Peper, a deep learning model that extends Salt's additive assumption to achieve unprecedented accuracy. Our comprehensive comparison against existing state-of-the-art methods, grounded in diverse metrics, and our out-of-distribution analysis highlight the limitations of current models in realistic settings. This analysis underscores the necessity for improved modelling techniques and data acquisition strategies, paving the way for more effective exploration of genetic intervention effects.
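
The additive assumption behind Salt is easy to state in code: the predicted transcriptome under a double perturbation is the control profile plus the two single-perturbation effects. A numpy sketch with synthetic data (not the authors' implementation):

```python
# Additive baseline for pairwise genetic interventions, in the spirit of Salt.
# Variable names and data are illustrative.
import numpy as np

def additive_baseline(delta_a, delta_b, control):
    """Predict the double-perturbation expression profile from single effects
    (each defined as perturbed mean minus control mean)."""
    return control + delta_a + delta_b

n_genes = 5000
control = np.random.rand(n_genes)           # mean control expression
delta_a = np.random.randn(n_genes) * 0.1    # effect of perturbing gene A alone
delta_b = np.random.randn(n_genes) * 0.1    # effect of perturbing gene B alone
pred_ab = additive_baseline(delta_a, delta_b, control)
```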


The Multi-modality Cell Segmentation Challenge: Towards Universal Solutions

Ma, Jun, Xie, Ronald, Ayyadhury, Shamini, Ge, Cheng, Gupta, Anubha, Gupta, Ritu, Gu, Song, Zhang, Yao, Lee, Gihun, Kim, Joonkee, Lou, Wei, Li, Haofeng, Upschulte, Eric, Dickscheid, Timo, de Almeida, José Guilherme, Wang, Yixin, Han, Lin, Yang, Xin, Labagnara, Marco, Rahi, Sahand Jamal, Kempster, Carly, Pollitt, Alice, Espinosa, Leon, Mignot, Tâm, Middeke, Jan Moritz, Eckardt, Jan-Niklas, Li, Wangkai, Li, Zhaoyang, Cai, Xiaochen, Bai, Bizhe, Greenwald, Noah F., Van Valen, David, Weisbart, Erin, Cimini, Beth A., Li, Zhuoshi, Zuo, Chao, Brück, Oscar, Bader, Gary D., Wang, Bo

arXiv.org Artificial Intelligence

Cell segmentation is a critical step for quantitative single-cell analysis in microscopy images. Existing cell segmentation methods are often tailored to specific modalities or require manual interventions to specify hyperparameters in different experimental settings. Here, we present a multi-modality cell segmentation benchmark, comprising over 1500 labeled images derived from more than 50 diverse biological experiments. The top participants developed a Transformer-based deep learning algorithm that not only exceeds existing methods, but can also be applied to diverse microscopy images across imaging platforms and tissue types without manual parameter adjustments. This benchmark and the improved algorithm offer promising avenues for more accurate and versatile cell analysis in microscopy imaging.

Cell segmentation is a fundamental task that is universally required for biological image analysis across a large number of different experimental settings and imaging modalities. For example, in multiplexed fluorescence image-based cancer microenvironment analysis, cell segmentation is the prerequisite for the identification of tumor sub-types, composition, and organization, which can lead to important biological insights [1]-[3]. However, the development of a universal and automatic cell segmentation technique continues to pose significant challenges due to the extensive diversity observed in microscopy images. This diversity arises from variations in cell origins, microscopy types, staining techniques, and cell morphologies. Recent advances [4], [5] have successfully demonstrated the feasibility of automatic and precise cellular segmentation for specific microscopy image types and cell types, such as fluorescence and mass spectrometry images [6], [7], differential interference contrast images of platelets [8], bacteria images [9] and yeast images [10], [11], but the selection of appropriate segmentation models remains a non-trivial task for non-expert users in conventional biology laboratories. Efforts have been made towards the development of generalized cell segmentation algorithms [9], [12], [13]. However, these algorithms were primarily trained using datasets consisting of gray-scale images and two-channel fluorescent images, lacking the necessary diversity to ensure robust generalization across a wide range of imaging modalities. For example, the segmentation models have struggled to perform effectively on RGB images, such as bone marrow aspirate slides stained with Jenner-Giemsa. Furthermore, these models often require manual selection of both the model type and the specific image channel to be segmented, posing challenges for biologists with limited computational expertise. Biomedical image data science competitions have emerged as an effective way to accelerate the development of cutting-edge algorithms [14], [15].
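
Benchmarks of this kind typically score instance segmentation by matching predicted to ground-truth instances at an IoU threshold. The sketch below computes F1 at IoU 0.5 with greedy matching (matches above 0.5 are unique by construction); it is a simplified stand-in, not the challenge's official evaluation code:

```python
# F1 at IoU 0.5 for two integer label masks (0 = background).
import numpy as np

def f1_at_iou(gt, pred, thr=0.5):
    gt_ids = [i for i in np.unique(gt) if i != 0]
    pred_ids = [i for i in np.unique(pred) if i != 0]
    matched, tp = set(), 0
    for g in gt_ids:
        gm = gt == g
        for p in pred_ids:
            if p in matched:
                continue
            pm = pred == p
            iou = np.logical_and(gm, pm).sum() / np.logical_or(gm, pm).sum()
            if iou >= thr:          # at thr >= 0.5 a match is one-to-one
                tp += 1
                matched.add(p)
                break
    fp = len(pred_ids) - tp
    fn = len(gt_ids) - tp
    return 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 1.0
```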


Annotation of spatially resolved single-cell data with STELLAR - Nature Methods

#artificialintelligence

Accurate cell-type annotation from spatially resolved single cells is crucial to understand functional spatial biology that is the basis of tissue organization. However, current computational methods for annotating spatially resolved single-cell data are typically based on techniques established for dissociated single-cell technologies and thus do not take spatial organization into account. Here we present STELLAR, a geometric deep learning method for cell-type discovery and identification in spatially resolved single-cell datasets. STELLAR automatically assigns cells to cell types present in the annotated reference dataset and discovers novel cell types and cell states. STELLAR transfers annotations across different dissection regions, different tissues and different donors, and learns cell representations that capture higher-order tissue structures. We successfully applied STELLAR to CODEX multiplexed fluorescent microscopy data and multiplexed RNA imaging datasets. Within the Human BioMolecular Atlas Program, STELLAR has annotated 2.6 million spatially resolved single cells with dramatic time savings. STELLAR (spatial cell learning) is a geometric deep learning model that works with spatially resolved single-cell datasets to both assign cell types in unannotated datasets based on a reference dataset and discover new cell types.
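
Geometric deep learning methods such as STELLAR operate on spatial graphs of cells. The preprocessing they imply, building a k-nearest-neighbor graph over cell centroids and aggregating neighbor features, can be sketched as follows; k, the features, and the aggregation are illustrative assumptions, not STELLAR's code:

```python
# Build a spatial kNN graph over cells and do one round of neighborhood
# feature aggregation (a single message-passing step, in plain numpy/scipy).
import numpy as np
from scipy.spatial import cKDTree

coords = np.random.rand(500, 2) * 1000   # cell centroids (µm), synthetic
feats = np.random.rand(500, 40)          # e.g., CODEX marker intensities
tree = cKDTree(coords)
_, nbrs = tree.query(coords, k=6)        # self + 5 nearest neighbors
neighbor_mean = feats[nbrs[:, 1:]].mean(axis=1)           # drop self (col 0)
node_repr = np.concatenate([feats, neighbor_mean], axis=1)  # (500, 80)
```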


Deep-learning–based image analysis is now just a click away

#artificialintelligence

Under an initiative by EPFL's Center for Imaging, a team of engineers from EPFL and Universidad Carlos III de Madrid have developed a plugin that makes it easier to incorporate artificial intelligence into image analysis for life-science research. The plugin, called deepImageJ, is described in a paper appearing today in Nature Methods. Over the past five years, image analysis has been shifting away from traditional mathematical and observational methods towards data-driven processing and artificial intelligence. This major development is making the detection and identification of valuable information in images easier, faster, and increasingly automated, in just about every research field. In the life sciences, deep learning, a subfield of artificial intelligence, is showing increasing potential for bioimage analysis.